17 research outputs found

    Flexible Composition of Robot Logic with Computer Vision Services

    Vision-based robotics is an ever-growing field within industrial automation. Demands for greater flexibility and higher quality motivate manufacturing companies to adopt these technologies for tasks such as material handling, assembly, and inspection. In addition to direct use in the manufacturing setting, robots combined with vision systems serve as a highly flexible means for realizing prototyping test-beds in the R&D context. Traditionally, the problem areas of robotics and computer vision are addressed separately. An exception is the study of vision-based servo control, which focuses on the control-theoretic aspects of vision-based robot guidance under the assumption that robot joints can be controlled directly. The missing part is a systematic approach to implementing robotic applications with vision sensing given industrial robots constrained by their programming interfaces. This thesis targets the development process of vision-based robotic systems in an event-driven environment. It focuses on the design and composition of three functional components: (1) the robot control function, (2) the image acquisition function, and (3) the image processing function. The thesis approaches its goal through a combination of laboratory results, a case study of an industrial company (Kongsberg Automotive AS), and formalization of computational abstractions and architectural solutions. The image processing function is tackled with the application of reactive pipelines. The proposed system development method allows for a smooth transition from early-stage vision algorithm prototyping to the integration phase. The image acquisition function is exposed in a service-oriented manner with the help of a flexible set of concurrent computational primitives. To realize control of industrial robots, a distributed architecture is devised that supports composability of communication-heavy robot logic, as well as flexible coupling of the robot control node with vision services.
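
    To make the reactive-pipeline idea concrete, the following is a minimal sketch in Python; the class and method names (ReactivePipeline, on_event, subscribe) are illustrative placeholders, not the thesis's actual API.

        # Minimal sketch of a reactive pipeline: stages are chained functions,
        # and the whole chain is triggered by an incoming event. All names are
        # illustrative placeholders, not the thesis's actual API.
        from typing import Callable, List

        class ReactivePipeline:
            def __init__(self, stages: List[Callable]):
                self.stages = stages
                self.subscribers: List[Callable] = []

            def subscribe(self, callback: Callable) -> None:
                # Downstream consumers register to receive pipeline results.
                self.subscribers.append(callback)

            def on_event(self, data) -> None:
                # Push the payload through every stage, then notify subscribers.
                for stage in self.stages:
                    data = stage(data)
                for callback in self.subscribers:
                    callback(data)

        # Trivial numeric stages stand in for image processing steps.
        pipeline = ReactivePipeline([lambda x: x * 2, lambda x: x + 1])
        pipeline.subscribe(lambda result: print("result:", result))
        pipeline.on_event(10)  # prints: result: 21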

    Event-driven industrial robot control architecture for the Adept V+ platform

    Modern industrial robotic systems are highly interconnected. They operate in a distributed environment and communicate with sensors, computer vision systems, mechatronic devices, and computational components. On the fundamental level, communication and coordination between all parties in such a distributed system are characterized by discrete event behavior. The latter is largely attributed to the specifics of communication over the network, which, in turn, facilitates asynchronous programming and explicit event handling. In addition, on the conceptual level, events are an important building block for realizing reactivity and coordination. Event-driven architecture has manifested its effectiveness for building loosely coupled systems based on publish-subscribe middleware, either general-purpose or robotics-oriented. Despite all the advances in middleware, industrial robots remain difficult to program in the context of distributed systems, to a large extent due to the limitations of the native robot platforms. This paper proposes an architecture for flexible event-based control of industrial robots based on the Adept V+ platform. The architecture consists of a robot controller providing a TCP/IP server and a collection of robot skills, and a high-level control module deployed to a dedicated computing device. The control module maintains bidirectional communication with the robot controller and publish/subscribe messaging with external systems. It is programmed in an asynchronous style using pyadept, a Python library based on Python coroutines, the AsyncIO event loop, and ZeroMQ middleware. The proposed solution facilitates the integration of Adept robots into distributed environments and the building of more flexible robotic solutions with event-based logic.
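
    The control-module pattern described above can be sketched with standard Python facilities; the host, port, command string, and topic below are placeholders, and the sketch uses plain asyncio and pyzmq rather than the actual pyadept API.

        # Sketch of the control-module pattern: an asyncio coroutine holds a TCP
        # connection to the V+ server on the robot controller and publishes
        # events over ZeroMQ. Host, port, command, and topic are placeholders;
        # this is plain asyncio + pyzmq, not the pyadept API itself.
        import asyncio
        import zmq
        import zmq.asyncio

        async def control_loop(robot_host: str = "10.0.0.2", robot_port: int = 1234):
            # Bidirectional TCP link to the robot controller.
            reader, writer = await asyncio.open_connection(robot_host, robot_port)

            # PUB socket announcing robot events to the rest of the system.
            ctx = zmq.asyncio.Context()
            pub = ctx.socket(zmq.PUB)
            pub.bind("tcp://*:5556")

            # Invoke a named robot skill and await its completion reply.
            writer.write(b"move_to_home\n")
            await writer.drain()
            reply = await reader.readline()

            # Publish the outcome as an event for subscribers (e.g., vision services).
            await pub.send_multipart([b"robot.status", reply.strip()])
            writer.close()
            await writer.wait_closed()

        asyncio.run(control_loop())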

    EPypes: a framework for building event-driven data processing pipelines

    Many data processing systems are naturally modeled as pipelines, where data flows through a network of computational procedures. This representation is particularly suitable for computer vision algorithms, which in most cases possess complex logic and a large number of parameters to tune. In addition, online vision systems, such as those in the industrial automation context, have to communicate with other distributed nodes. When developing a vision system, one normally proceeds from ad hoc experimentation and prototyping to highly structured system integration. The early stages of this continuum are characterized by the challenges of developing a feasible algorithm, while the later stages deal with composing the vision function with other components in a networked environment. In between, one strives to manage the complexity of the developed system, as well as to preserve existing knowledge. To tackle these challenges, this paper presents EPypes, an architecture and Python-based software framework for developing vision algorithms in the form of computational graphs and integrating them with distributed systems based on publish-subscribe communication. EPypes facilitates flexibility of algorithm prototyping and provides a structured approach to managing algorithm logic and exposing the developed pipelines as part of online systems.
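
    The computational-graph representation can be illustrated with a small sketch; this shows the underlying idea (functions as nodes, named tokens as edges, tunable parameters as input tokens), not the EPypes API itself.

        # Sketch of a computational graph: each node is a pure function, its
        # inputs name the tokens it consumes, and execution fires any node whose
        # inputs are available. Illustrates the idea only, not the EPypes API.
        graph = {
            "binary": (lambda image, thr: [p > thr for p in image], ["image", "thr"]),
            "count":  (lambda binary: sum(binary), ["binary"]),
        }

        def run_graph(graph, tokens):
            # Assumes the graph is acyclic and all inputs are eventually produced.
            pending = dict(graph)
            while pending:
                for name, (func, inputs) in list(pending.items()):
                    if all(i in tokens for i in inputs):
                        tokens[name] = func(*[tokens[i] for i in inputs])
                        del pending[name]
            return tokens

        # The threshold is a tunable parameter passed in alongside the data.
        result = run_graph(graph, {"image": [10, 200, 90, 255], "thr": 100})
        print(result["count"])  # 2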

    Calibration of robot vision systems for flexible assembly

    In the contemporary competitive and fast-changing economy, manufacturing enterprises require a high degree of flexibility, to respond to changing demands in a timely manner, and of automation, to cope with the requirements of speed and quality. Industrial robots play a vital role in flexible assembly systems, and the application of machine vision in conjunction with robotized systems constitutes a promising direction in contemporary industrial automation. It is also of great interest to understand the role of software in the provision of assembly flexibility. In the area of robot vision system calibration, more accurate, precise, and repeatable calibration procedures are required. This master's thesis applies a systems approach and is divided into two main themes, namely software-defined assembly flexibility and robot vision system calibration. The first theme is aimed at understanding the concept of assembly flexibility and the role of software in its provision. This topic is considered at a high level of abstraction and is investigated by means of a literature study, the building of systems models, and discussion. It is concluded that there is a lack of common understanding and agreed-upon taxonomy of flexibility in manufacturing. Certain flexibility types are identified as having the biggest reliance on software. The notion of software-defined assembly flexibility is proposed. Machine vision and holonic manufacturing control are identified as the main enabling technologies of software-defined assembly flexibility. The second theme is focused on robot vision system calibration. It is viewed at a low level of abstraction and constitutes a practical undertaking aimed at studying the processes of camera calibration, stereo vision system calibration, and hand-eye calibration, and at improving the quality of these processes by maximizing their accuracy, precision, and repeatability. A Python library, FlexVi, is developed and used for studying the calibration processes from the perspective of interactive computing and data analysis. The outcomes of camera calibration are analyzed on the basis of a large number of calibration experiments, with subsequent distribution fitting to find the most frequent values. A method for stereo vision system calibration aimed at achieving high repeatability is developed. A method for eliminating outliers in the hand-eye calibration process is developed, based on analysis of the resulting precision of object-to-base transformation measurements.
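
    The hand-eye calibration step studied here can be sketched with OpenCV's standard routine; recorded_poses below is a hypothetical list of per-station pose measurements, and the use of cv2.calibrateHandEye (available since OpenCV 4.1) is illustrative rather than the FlexVi implementation.

        # Sketch of hand-eye calibration using OpenCV's standard routine
        # (cv2.calibrateHandEye). recorded_poses is a hypothetical list of
        # per-station measurements; this illustrates the process under study,
        # not the FlexVi implementation.
        import cv2

        R_gripper2base, t_gripper2base = [], []
        R_target2cam, t_target2cam = [], []
        for pose in recorded_poses:  # hypothetical: one entry per robot station
            R_gripper2base.append(pose["R_gb"])  # gripper pose in the base frame
            t_gripper2base.append(pose["t_gb"])
            R_target2cam.append(pose["R_tc"])    # target pose in the camera frame
            t_target2cam.append(pose["t_tc"])

        # Solves AX = XB for the camera-to-gripper transformation.
        R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
            R_gripper2base, t_gripper2base, R_target2cam, t_target2cam)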

    Analysis of Camera Calibration with Respect to Measurement Accuracy

    Machine vision is used for applications such as automated inspection, process control, and robot guidance, and is directly associated with increasing manufacturing process flexibility. The presence of noise in image data affects the robustness and accuracy of machine vision, which can be an obstacle for industrial applications. Accuracy depends both on feature detection, which yields pixel values of the measures of interest, and on vision system calibration, which allows transforming pixel measurements into real-world coordinates. This paper analyzes the camera calibration process and proposes a new method for camera calibration based on numerical analysis of the probability distributions of the calibration parameters and removal of outliers. The method can be used to improve the accuracy and robustness of the vision system calibration process.
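
    The general idea can be sketched as follows: treat each calibration parameter estimated over many repeated runs as a sample from a distribution, and discard outlying runs before settling on a final value. The numbers and the MAD-based rejection rule below are illustrative; the paper's own criterion may differ.

        # Sketch: repeated calibration runs yield a distribution of each
        # parameter (here, the focal length fx); outlying runs are rejected
        # with a robust median-absolute-deviation rule before taking the
        # final value. Numbers and the 3-MAD threshold are illustrative.
        import numpy as np

        fx_runs = np.array([1250.1, 1249.8, 1250.4, 1400.0, 1249.9, 1250.2, 1100.0])

        med = np.median(fx_runs)
        mad = np.median(np.abs(fx_runs - med))
        inliers = fx_runs[np.abs(fx_runs - med) < 3 * 1.4826 * mad]

        fx_final = np.median(inliers)
        print(f"kept {inliers.size}/{fx_runs.size} runs, fx = {fx_final:.1f}")
        # kept 5/7 runs, fx = 1250.1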

    Control of visually guided event-based behaviors in industrial robotic systems

    Vision-based robotics is an ever-growing field within industrial automation. Demands for greater flexibility and higher quality motivate manufacturing companies to adopt these technologies for tasks such as material handling, assembly, and inspection. Manufacturing systems, as well as their control mechanisms, are typically modeled as discrete event systems, and off-the-shelf PLC hardware is used for the realization of sequential control systems. However, with the introduction of robots and machine vision solutions, it becomes harder to reason about the combined systems collectively. Vision algorithms require complex processing, and imaging setups have to be accurately calibrated with respect to other active systems such as robots, sensors, and material handling equipment. This thesis considers the application area of machine vision, and particularly visual metrology systems, from the perspective of being part of larger cyber-physical systems. This includes, in addition to the traditional computer vision algorithms and estimation methods, considerations of distributed system architecture and behavioral characteristics expressed with discrete event semantics. The ultimate goal is to take the first steps towards a theory backing control of visually guided event-based behaviors in industrial robotic and automation systems. The thesis approaches this goal through a combination of laboratory results, a case study of an industrial company, and formalization of modeling abstractions and architectural solutions. The practical results include a method for image analysis of star washers (small automotive parts), a feature engineering technique and machine learning experiments for classification of star washers' orientation on a feeder, and a probabilistic analysis of the camera calibration process. In addition, to model and implement industrial vision systems in an event-driven environment, a formalism of Discrete Event Dataflow is formulated, and the EPypes software framework is developed.
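
    As an illustration of the classification experiments mentioned above, the following sketch feeds hand-engineered per-image features to an off-the-shelf classifier; the synthetic features and the choice of logistic regression are placeholders, not the thesis's actual feature engineering.

        # Sketch of orientation classification: a few geometric features per
        # washer image, fed to an off-the-shelf classifier. The synthetic
        # features and the classifier choice are placeholders only.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # Stand-ins for engineered features (e.g., area ratio, edge response).
        X_up = rng.normal([0.8, 0.3, 0.5], 0.05, size=(50, 3))
        X_down = rng.normal([0.6, 0.5, 0.4], 0.05, size=(50, 3))
        X = np.vstack([X_up, X_down])
        y = np.array([1] * 50 + [0] * 50)  # 1 = face-up, 0 = face-down

        clf = LogisticRegression().fit(X, y)
        print(clf.predict([[0.78, 0.32, 0.49]]))  # -> [1], i.e., face-up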

    Discrete event dataflow as a formal approach to specification of industrial vision systems

    The need for more flexible manufacturing systems stimulates the adoption of industrial robots in combination with intelligent computing resources and sophisticated sensing technologies. In this context, industrial vision systems play the role of an inherently flexible sensing means that can be used for a variety of tasks within automated inspection, process control, and robot guidance. When vision sensing is used within a large complex system, it is of particular importance to handle the complexity by introducing appropriate formal methods. This paper overviews the challenges arising during the design, implementation, and application of industrial vision systems, and proposes an approach, dubbed Discrete Event Dataflow (DEDF), that allows vision dataflow to be formally specified in the context of larger systems.
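
    The discrete-event view of a dataflow can be sketched with a simple firing rule: a node fires only once events have delivered all of its input tokens, and its firing emits a new event. The node functions below are trivial placeholders, and the sketch illustrates the general semantics rather than the DEDF formalism itself.

        # Sketch of discrete-event dataflow semantics: each node lists its input
        # tokens and its output token; an event delivers a token, and a node
        # fires once all its inputs are present. Placeholder functions only;
        # this is not the paper's DEDF formalism.
        from collections import deque

        nodes = {
            "undistort": (lambda img: img, ["frame"], "rectified"),
            "measure":   (lambda img: len(img), ["rectified"], "length"),
        }

        def run(initial_events):
            tokens, events = {}, deque(initial_events)
            while events:
                name, value = events.popleft()  # consume one event
                tokens[name] = value
                for func, inputs, output in nodes.values():
                    if all(i in tokens for i in inputs) and output not in tokens:
                        # Firing rule: all inputs present -> emit output event.
                        events.append((output, func(*[tokens[i] for i in inputs])))
            return tokens

        print(run([("frame", [0, 1, 2])])["length"])  # 3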

    Flexible image acquisition service for distributed robotic systems

    The widespread use of vision systems in robotics introduces a number of challenges related to the management of image acquisition and image processing tasks, as well as their coupling to the robot control function. With the proliferation of more distributed setups and flexible robotic architectures, the image acquisition workflow needs to support a wider variety of communication styles and application scenarios. This paper presents FxIS, a flexible image acquisition service targeting distributed robotic systems with event-based communication. The principal idea of FxIS is the composition of a number of execution threads with a set of concurrent data structures, supporting acquisition from multiple cameras that is closely synchronized in time, both between the cameras and with the request timestamp.
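
    The FxIS idea can be sketched with standard Python threading: one acquisition thread per camera streams timestamped frames into a bounded buffer, and a request timestamp is matched against the buffers to assemble a time-synchronized set. Camera I/O is faked with a counter, and all names and sizes are illustrative, not the FxIS API.

        # Sketch of the FxIS idea: per-camera acquisition threads fill bounded
        # buffers with timestamped frames; a request timestamp then selects, per
        # camera, the closest buffered frame. Camera I/O is faked; names and
        # buffer sizes are illustrative placeholders, not the FxIS API.
        import threading, time
        from collections import deque

        buffers = {cam: deque(maxlen=100) for cam in ("cam0", "cam1")}
        lock = threading.Lock()

        def acquire(cam: str) -> None:
            frame = 0
            while True:
                # A real service would grab from the camera driver here.
                with lock:
                    buffers[cam].append((time.monotonic(), f"{cam}-frame-{frame}"))
                frame += 1
                time.sleep(0.01)

        for cam in buffers:
            threading.Thread(target=acquire, args=(cam,), daemon=True).start()

        def snapshot(t_request: float):
            # Per camera, take the frame whose timestamp is closest to the
            # request time, yielding a time-synchronized multi-camera set.
            with lock:
                return {cam: min(buf, key=lambda f: abs(f[0] - t_request))
                        for cam, buf in buffers.items()}

        time.sleep(0.1)
        print(snapshot(time.monotonic()))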
